
    Learning Generalizable Dexterous Manipulation from Human Grasp Affordance

    Dexterous manipulation with a multi-finger hand is one of the most challenging problems in robotics. While recent progress in imitation learning has greatly improved sample efficiency over reinforcement learning, the learned policy can hardly generalize to novel objects given limited expert demonstrations. In this paper, we propose to learn dexterous manipulation using large-scale demonstrations with diverse 3D objects in a category, generated from a human grasp affordance model. This generalizes the policy to novel object instances within the same category. To train the policy, we propose a novel imitation learning objective jointly with a geometric representation learning objective on our demonstrations. By experimenting with relocating diverse objects in simulation, we show that our approach outperforms baselines by a large margin when manipulating novel objects. We also ablate the importance of 3D object representation learning for manipulation. Videos, code, and additional information are available on the project website: https://kristery.github.io/ILAD .
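    The joint objective this abstract describes, an imitation term trained together with a geometric representation term, can be sketched roughly as follows. The linear policy, the point-prediction head, the data shapes, and the weighting `lam` are all illustrative assumptions, not the paper's actual architecture.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-ins for demonstration data (shapes are illustrative assumptions).
    states = rng.normal(size=(32, 10))          # batch of demonstration states
    expert_actions = rng.normal(size=(32, 4))   # corresponding expert actions
    points = rng.normal(size=(32, 3))           # per-state 3D geometry targets

    W_pi = rng.normal(scale=0.1, size=(4, 10))  # toy linear policy
    W_geo = rng.normal(scale=0.1, size=(3, 10)) # toy geometry head

    def joint_objective(states, expert_actions, points, lam=0.1):
        """Imitation (behavior cloning) term plus a geometric representation
        learning term, optimized jointly; lam is an assumed trade-off weight."""
        bc = np.mean((states @ W_pi.T - expert_actions) ** 2)   # imitate expert
        geo = np.mean((states @ W_geo.T - points) ** 2)         # predict geometry
        return bc + lam * geo

    loss = joint_objective(states, expert_actions, points)
    ```

    The point of the sketch is only the shape of the objective: a single scalar loss whose gradient trains the policy and the 3D representation at the same time.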

    Learning Continuous Grasping Function with a Dexterous Hand from Human Demonstrations

    We propose to learn to generate grasping motion for manipulation with a dexterous hand using implicit functions. With continuous time inputs, the model can generate a continuous and smooth grasping plan. We name the proposed model Continuous Grasping Function (CGF). CGF is learned via generative modeling with a Conditional Variational Autoencoder using 3D human demonstrations. We first convert large-scale human-object interaction trajectories to robot demonstrations via motion retargeting, and then use these demonstrations to train CGF. During inference, we sample from CGF to generate different grasping plans in the simulator and select the successful ones to transfer to the real robot. By training on diverse human data, our CGF generalizes to manipulating multiple objects. Compared to previous planning algorithms, CGF is more efficient and achieves a significant improvement in success rate when transferred to grasping with the real Allegro Hand. Our project page is at https://jianglongye.com/cgf .
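    The inference procedure the abstract outlines, sample a latent code and then query the learned function at dense continuous times to obtain a smooth plan, can be sketched with a toy decoder. The two-layer network, the latent dimension, and the 16-DoF output (matching the Allegro Hand's joint count) are assumptions standing in for the trained CVAE decoder.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    latent_dim = 4

    # Toy stand-in for a trained CVAE decoder: maps (latent z, continuous time t)
    # to a hand joint configuration (16 DoF assumed here).
    W1 = rng.normal(scale=0.5, size=(8, latent_dim + 1))
    W2 = rng.normal(scale=0.5, size=(16, 8))

    def cgf_decode(z, t):
        """One point on a grasping trajectory at continuous time t in [0, 1]."""
        h = np.tanh(W1 @ np.append(z, t))   # condition the decoder on time
        return W2 @ h

    def sample_plan(n_steps=50):
        """Sample one latent code, then evaluate it at dense time inputs."""
        z = rng.normal(size=latent_dim)
        ts = np.linspace(0.0, 1.0, n_steps)
        return np.stack([cgf_decode(z, t) for t in ts])

    # Different latent samples give different candidate grasps to try in sim.
    plans = [sample_plan() for _ in range(3)]
    ```

    Because `t` is a continuous input rather than a discrete step index, the same decoder can be queried at any temporal resolution, which is what makes the generated plan smooth.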

    VoxDet: Voxel Learning for Novel Instance Detection

    Detecting unseen instances from multi-view templates is a challenging problem due to its open-world nature. Traditional methodologies, which primarily rely on 2D representations and matching techniques, are often inadequate for handling pose variations and occlusions. To address this, we introduce VoxDet, a pioneering 3D geometry-aware framework that fully utilizes a strong 3D voxel representation and a reliable voxel matching mechanism. VoxDet first proposes a template voxel aggregation (TVA) module, effectively transforming multi-view 2D images into 3D voxel features. By leveraging associated camera poses, these features are aggregated into a compact 3D template voxel. In novel instance detection, this voxel representation demonstrates heightened resilience to occlusion and pose variations. We also find that a 3D reconstruction objective helps to pre-train the 2D-3D mapping in TVA. Second, to quickly align with the template voxel, VoxDet incorporates a Query Voxel Matching (QVM) module. The 2D queries are first converted into their voxel representation with the learned 2D-3D mapping. Since the 3D voxel representations encode geometry, we can first estimate the relative rotation and then compare the aligned voxels, leading to improved accuracy and efficiency. Exhaustive experiments on the demanding LineMod-Occlusion, YCB-Video, and the newly built RoboTools benchmarks show that VoxDet outperforms various 2D baselines remarkably, with 20% higher recall and faster speed. To the best of our knowledge, VoxDet is the first to incorporate implicit 3D knowledge for 2D tasks.
    Comment: 17 pages, 10 figures
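    The rotate-then-match idea behind QVM, estimate the relative rotation first and only then compare the aligned voxels, can be illustrated with a toy brute-force search. The single-channel 8×8×8 grid and the restriction to the four 90-degree yaw rotations are simplifying assumptions; the actual module works on learned voxel features and estimates rotation with a network.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy template voxel grid, standing in for the TVA output for one instance.
    template = rng.random((8, 8, 8))

    # A "query" voxel: the same object observed under an unknown 90-degree yaw.
    true_k = 3
    query = np.rot90(template, k=true_k, axes=(0, 1))

    def qvm_match(template, query):
        """Estimate the relative rotation by exhaustive search, then score the
        aligned voxels (here by mean squared error)."""
        best_k, best_err = None, np.inf
        for k in range(4):                                 # candidate yaws
            aligned = np.rot90(query, k=-k, axes=(0, 1))   # undo candidate rotation
            err = np.mean((aligned - template) ** 2)
            if err < best_err:
                best_k, best_err = k, err
        return best_k, best_err

    k, err = qvm_match(template, query)
    ```

    Comparing voxels only after alignment is what lets a 3D representation shrug off pose variation: a plain 2D template match has no analogous way to factor the rotation out first.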

    Zero-shot Pose Transfer for Unrigged Stylized 3D Characters

    Transferring the pose of a reference avatar to stylized 3D characters of various shapes is a fundamental task in computer graphics. Existing methods either require the stylized characters to be rigged, or they use the stylized character in the desired pose as ground truth during training. We present a zero-shot approach that requires only widely available deformed non-stylized avatars in training, and deforms stylized characters of significantly different shapes at inference. Classical methods achieve strong generalization by deforming the mesh at the triangle level, but this requires labelled correspondences. We leverage the power of local deformation, but without requiring explicit correspondence labels. We introduce a semi-supervised shape-understanding module to bypass the need for explicit correspondences at test time, and an implicit pose deformation module that deforms individual surface points to match the target pose. Furthermore, to encourage realistic and accurate deformation of stylized characters, we introduce an efficient volume-based test-time training procedure. Because it needs neither rigging nor the deformed stylized character at training time, our model generalizes to categories with scarce annotation, such as stylized quadrupeds. Extensive experiments demonstrate the effectiveness of the proposed method compared to state-of-the-art approaches trained with comparable or more supervision. Our project page is available at https://jiashunwang.github.io/ZPT .
    Comment: CVPR 202

    Antioxidative Effect of Large Molecular Polymeric Pigments Extracted from Zijuan Pu-erh Tea In Vitro and In Vivo

    ABSTRACT The antioxidative effect of large molecular polymeric pigments (LMPP) extracted from Zijuan Pu-erh tea was investigated in vitro and in vivo. The results showed that LMPP had significant scavenging activities on the hydroxyl radical and the 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical in vitro and showed strong reducing power. Moreover, the 50% inhibitory concentrations of LMPP for scavenging the DPPH radical and the hydroxyl radical were 0.217 mg·mL⁻¹ and 0.461 mg·mL⁻¹, respectively. In vivo, the LMPP-treated rat groups showed significantly increased serum superoxide dismutase (SOD) and glutathione peroxidase (GSH-PX) activities, reduced malondialdehyde (MDA) formation, increased nitric oxide (NO) production, and significantly decreased rat endothelin-1 (ET-1) concentrations compared with those in the hyperlipidemia model group (P < 0.05). The serum SOD and GSH-PX activities and NO concentration were 66.88%, 29.09%, and 55.11% higher, respectively, whereas the serum ET-1 and MDA concentrations were 34.62% and 59.11% lower, in the high-dose LMPP treatment group (1.215 g·kg⁻¹ body weight) than in the hyperlipidemia model group (P < 0.05). These results show that LMPP has a good antioxidative function and can be considered a natural antioxidant source.